    Full Recovery from Point Values: an Optimal Algorithm for Chebyshev Approximability Prior

    Given pointwise samples of an unknown function belonging to a certain model set, one seeks in Optimal Recovery to recover this function in a way that minimizes the worst-case error of the recovery procedure. While it is often known that such an optimal recovery procedure can be chosen to be linear, e.g. when the model set is based on approximability by a subspace of continuous functions, a construction of the procedure is rarely available. This note uncovers a practical algorithm to construct a linear optimal recovery map when the approximation space is a Chebyshev space of dimension at least three that contains the constant functions.
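
    The abstract does not spell out the algorithm, but the object it constructs is easy to illustrate: a recovery map that is linear in the point values. The numpy sketch below builds such a map by least squares in a low-dimensional Chebyshev polynomial space; the nodes, the dimensions, and the least-squares choice are illustrative assumptions, not the note's optimal construction.

```python
# Illustrative sketch only: a recovery map that is linear in the point values,
# built by least squares in a Chebyshev polynomial space. The note's optimal
# algorithm is more subtle; all choices below are arbitrary for illustration.
import numpy as np
from numpy.polynomial import chebyshev as C

def linear_recovery_map(x_samples, n_cheb):
    """Return a matrix M so that coefficients = M @ f(x_samples).

    Each Chebyshev coefficient is a fixed linear combination of the point
    values, independent of f itself -- i.e., the recovery map is linear.
    """
    V = C.chebvander(x_samples, n_cheb - 1)  # Vandermonde in T_0, ..., T_{n-1}
    return np.linalg.pinv(V)                 # least-squares inverse

# Usage: recover a smooth function on [-1, 1] from 10 point values.
x = np.linspace(-1.0, 1.0, 10)
f = lambda t: np.exp(t) * np.sin(3 * t)
M = linear_recovery_map(x, n_cheb=5)         # 5-dimensional Chebyshev space
coeffs = M @ f(x)                            # linear in the samples f(x)
t = np.linspace(-1.0, 1.0, 201)
print(np.max(np.abs(C.chebval(t, coeffs) - f(t))))  # uniform recovery error
```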

    Linearly Embedding Sparse Vectors from $\ell_2$ to $\ell_1$ via Deterministic Dimension-Reducing Maps

    This note is concerned with deterministic constructions of $m \times N$ matrices satisfying a restricted isometry property from $\ell_2$ to $\ell_1$ on $s$-sparse vectors. Similarly to the standard ($\ell_2$ to $\ell_2$) restricted isometry property, such constructions can be found in the regime $m \asymp s^2$, at least in theory. With effectiveness of implementation in mind, two simple constructions are presented in the less pleasing but still relevant regime $m \asymp s^4$. The first one, executing a Las Vegas strategy, is quasideterministic and applies in the real setting. The second one, exploiting Golomb rulers, is explicit and applies in the complex setting. As a stepping stone, an explicit isometric embedding from $\ell_2^n(\mathbb{C})$ to $\ell_4^{cn^2}(\mathbb{C})$ is presented. Finally, the extension of the problem from sparse vectors to low-rank matrices is raised as an open question.
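
    The defining property is easy to test numerically: a matrix $A$ satisfies the $\ell_2$-to-$\ell_1$ restricted isometry on $s$-sparse vectors if $\|Ax\|_1 \approx \|x\|_2$ for all such $x$, up to normalization. The sketch below checks this for a suitably scaled Gaussian matrix, which is only a random stand-in for the note's (quasi)deterministic constructions; all sizes are arbitrary choices.

```python
# Numerical check of the l2-to-l1 restricted isometry property: for s-sparse x,
# ||A x||_1 should stay close to ||x||_2 after normalization. A Gaussian matrix
# is a stand-in here; the note's constructions are (quasi)deterministic.
import numpy as np

rng = np.random.default_rng(0)
N, s, m = 1000, 5, 200
# For Gaussian entries, E||Ax||_1 = m * sqrt(2/pi) * ||x||_2, hence the scaling.
A = rng.standard_normal((m, N)) / (m * np.sqrt(2.0 / np.pi))

ratios = []
for _ in range(500):
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)  # random s-sparse vector
    x[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ x, 1) / np.linalg.norm(x, 2))

print(min(ratios), max(ratios))  # both should be close to 1
```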

    Near-Optimal Estimation of Linear Functionals with Log-Concave Observation Errors

    This note addresses the question of optimally estimating a linear functional of an object acquired through linear observations corrupted by random noise, where optimality pertains to a worst-case setting tied to a symmetric, convex, and closed model set containing the object. It complements the article "Statistical Estimation and Optimal Recovery" published in the Annals of Statistics in 1994. There, Donoho showed (among other things) that, for Gaussian noise, linear maps provide near-optimal estimation schemes relative to a performance measure relevant in Statistical Estimation. Here, we advocate for a different performance measure, arguably more relevant in Optimal Recovery. We show that, relative to this new measure, linear maps still provide near-optimal estimation schemes even if the noise is merely log-concave. Our arguments, which make a connection to the deterministic noise situation and bypass properties specific to the Gaussian case, offer an alternative to parts of Donoho's proof.
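
    As a rough illustration (not the note's scheme), consider estimating the linear functional $\langle q, x \rangle$ from $y = Ax + e$ with a linear map $w^\top y$. Over the Euclidean-ball model set assumed below, the worst-case bias is $\|q - A^\top w\|_2$ and the noise contributes on the order of $\sigma \|w\|_2$, so a ridge-type trade-off yields a closed-form linear estimator. Every choice here, including the model set and the objective, is an assumption made for the sketch.

```python
# A minimal sketch (not the note's construction): estimating the linear
# functional <q, x> from noisy data y = A x + e with a *linear* map w^T y.
# Over the model set {||x||_2 <= 1}, the worst-case bias is ||q - A^T w||_2,
# and the noise contributes roughly sigma * ||w||_2, so we trade the two off.
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma = 50, 20, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
q = rng.standard_normal(n); q /= np.linalg.norm(q)

# Closed-form minimizer of ||q - A^T w||^2 + sigma^2 ||w||^2 (ridge-type).
w = np.linalg.solve(A @ A.T + sigma**2 * np.eye(m), A @ q)

x = rng.standard_normal(n); x /= 2 * np.linalg.norm(x)  # point in the model set
y = A @ x + sigma * rng.standard_normal(m)              # Gaussian noise stand-in
print(abs(w @ y - q @ x))                               # estimation error
```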

    On the Optimal Recovery of Graph Signals

    Learning a smooth graph signal from partially observed data is a well-studied task in graph-based machine learning. We consider this task from the perspective of optimal recovery, a mathematical framework for learning a function from observational data that adopts a worst-case perspective tied to model assumptions on the function to be learned. Earlier work in the optimal recovery literature has shown that minimizing a regularized objective produces optimal solutions for a general class of problems, but did not fully identify the regularization parameter. Our main contribution provides a way to compute regularization parameters that are optimal or near-optimal (depending on the setting), specifically for graph signal processing problems. Our results offer a new interpretation for classical optimization techniques in graph-based learning and also come with new insights for hyperparameter selection. We illustrate the potential of our methods in numerical experiments on several semi-synthetic graph signal processing datasets.

    Comment: This paper has been accepted by the 14th International Conference on Sampling Theory and Applications (SampTA 2023).
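
    The regularized objective in question is, in its classical form, Laplacian-regularized least squares: minimize $\|Sx - y\|_2^2 + \tau \, x^\top L x$, where $S$ samples the observed vertices and $L$ is the graph Laplacian. The sketch below solves this objective on a toy graph; the paper's actual contribution, an optimal or near-optimal choice of $\tau$, is not reproduced, and $\tau = 1$ is a placeholder.

```python
# A minimal sketch of the classical regularized objective the abstract refers
# to: recover a smooth graph signal x from noisy values on a few vertices by
# minimizing ||S x - y||^2 + tau * x^T L x. Choosing tau well is exactly what
# the paper addresses; tau = 1.0 below is an arbitrary placeholder.
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 10
W = (rng.random((n, n)) < 0.15).astype(float)     # random undirected graph
W = np.triu(W, 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W                    # combinatorial Laplacian

# Ground truth: a smooth signal built from low-frequency Laplacian modes.
evals, evecs = np.linalg.eigh(L)
x_true = evecs[:, :3] @ rng.standard_normal(3)

obs = rng.choice(n, size=k, replace=False)        # observed vertices
S = np.zeros((k, n)); S[np.arange(k), obs] = 1.0  # sampling matrix
y = S @ x_true + 0.05 * rng.standard_normal(k)    # noisy partial observations

tau = 1.0  # placeholder regularization parameter
# First-order condition: (S^T S + tau * L) x = S^T y; lstsq handles rank issues.
x_hat = np.linalg.lstsq(S.T @ S + tau * L, S.T @ y, rcond=None)[0]
print(np.linalg.norm(x_hat - x_true))             # recovery error
```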